

Conference on Neural Information Processing Systems






A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication

Peng Jiang, Gagan Agrawal

Neural Information Processing Systems

Algorithm (periodic quantized averaging):
Require: initial local models x_{0,i}
1: for j = 0, 1, 2, ... do
2:   Randomly sample m training examples
3:   Compute the stochastic gradient
4:   Update the local model
5:   if ((j + 1) mod p) == 0 then
6:     Compute the difference from the last synchronized model
7:     Quantize the difference
8:     Average the quantized differences across workers
9:     Update the local models with the average
10:  end if
11: end for

In this way, the algorithm achieves an O(1/sqrt(MK)) convergence rate while the limited communication does not impair the gradient quality. With 2-bit quantization, the compression ratio is 32/2 = 16 (if each parameter is stored as a 32-bit float), since only the quantized differences are communicated between workers.
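The periodic averaging with quantized model differences can be sketched in Python. This is a minimal illustration, not the paper's exact construction: the toy quadratic loss, the uniform 2-bit stochastic quantizer, and all variable names here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, bits=2):
    """Uniform stochastic quantizer (illustrative stand-in for the paper's quantizer)."""
    levels = 2 ** bits - 1
    scale = max(np.max(np.abs(v)), 1e-12)
    t = np.abs(v) / scale * levels                # map magnitudes into [0, levels]
    low = np.floor(t)
    q = low + (rng.random(v.shape) < (t - low))   # stochastic rounding: unbiased in expectation
    return np.sign(v) * q / levels * scale

M, K, p, lr = 4, 200, 10, 0.1                     # workers, steps, sync period, step size
target = np.array([1.0, -2.0, 3.0])               # minimizer of the toy quadratic loss
x = [np.zeros(3) for _ in range(M)]               # local models x_{0,i}
x_sync = np.zeros(3)                              # last globally agreed model

for j in range(K):
    for i in range(M):
        # Noisy gradient of f(x) = 0.5 * ||x - target||^2 (stands in for a minibatch gradient)
        grad = (x[i] - target) + 0.01 * rng.standard_normal(3)
        x[i] = x[i] - lr * grad                   # local SGD update
    if (j + 1) % p == 0:                          # every p steps: quantized averaging
        diffs = [quantize(xi - x_sync) for xi in x]
        x_sync = x_sync + sum(diffs) / M          # apply the averaged quantized difference
        x = [x_sync.copy() for _ in range(M)]     # all workers resume from the agreed model

print(np.round(x_sync, 2))
```

Because the quantizer rounds stochastically, the averaged differences remain unbiased estimates of the true averaged update, which is what keeps communication compression from biasing the trajectory.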





N Accelerating

Neural Information Processing Systems

Specifically, for the search spaces and tasks, we use NAS-Bench-101 (CIFAR-10), NAS-Bench-201 (CIFAR-10, CIFAR-100, and ImageNet16-120), NAS-Bench-301 (CIFAR-10), and TransNAS-Bench-101 Micro and Macro (Jigsaw, Object Classification, Scene Classification, Autoencoder) from NAS-Bench-Suite. We consider all 44K architectures referenced in Table 2. See Table 3 and Appendix D for the full results.